AI Bias


FAIRTOPIA: Envisioning Multi-Agent Guardianship for Disrupting Unfair AI Pipelines

Vakali, Athena, Dimitriadis, Ilias

arXiv.org Artificial Intelligence

AI models have become active decision makers, often acting without human supervision. The rapid advancement of AI technology has already caused harmful incidents that have hurt individuals and societies, and AI unfairness is heavily criticized. It is urgent to disrupt AI pipelines which largely neglect human principles and focus on computational bias exploration at the data (pre-), model (in-), and deployment (post-) processing stages. We claim that by exploiting advances in agent technology, we can introduce cautious, prompt, and ongoing fairness watch schemes, under realistic, systematic, and human-centric fairness expectations. We envision agents as fairness guardians, since agents learn from their environment, adapt to new information, and solve complex problems by interacting with external tools and other systems. To set the proper fairness guardrails in the overall AI pipeline, we introduce a fairness-by-design approach which embeds multi-role agents in an end-to-end (human-to-AI) synergetic scheme. Our position is that we can design adaptive and realistic AI fairness frameworks, and we introduce a generalized algorithm which can be customized to the requirements and goals of each AI decision-making scenario. Our proposed framework, called FAIRTOPIA, is structured over a three-layered architecture, which encapsulates the AI pipeline inside an agentic guardian and a knowledge-based, self-refining layered scheme. Based on our proposition, we enact fairness watch in all of the AI pipeline stages, under robust multi-agent workflows, which will inspire new fairness research hypotheses, heuristics, and methods grounded in human-centric, systematic, interdisciplinary, socio-technical principles.


AI Biases as Asymmetries: A Review to Guide Practice

Waters, Gabriella, Honenberger, Phillip

arXiv.org Artificial Intelligence

Gabriella Waters (CEAMLS, Morgan State University) and Phillip Honenberger (CEAMLS, Morgan State University), equal contribution. [Preprint, Nov. 21, 2024]

Abstract: The understanding of bias in AI is currently undergoing a revolution. Initially understood as errors or flaws, biases are increasingly recognized as integral to AI systems and sometimes preferable to less biased alternatives. In this paper we review the reasons for this changed understanding and provide new guidance on two questions: First, how should we think about and measure biases in AI systems, consistent with the new understanding? Second, what kinds of bias in an AI system should we accept or even amplify, and what kinds should we minimize or eliminate, and why? The key to answering both questions, we argue, is to understand biases as "violations of a symmetry standard" (following Kelly). We distinguish three main types of asymmetry in AI systems - error biases, inequality biases, and process biases - and highlight places in the pipeline of AI development and application where bias of each type is likely to be good, bad, or inevitable.

Introduction: The understanding of bias in AI is currently undergoing a revolution. Initially perceived as errors or flaws, biases are increasingly recognized as integral to AI systems and sometimes preferable to less biased alternatives. Cognitive psychology and statistics have informed this shift, highlighting the benefits and costs of biases in decision-making processes. Cognitive psychology presents biases as often helpful in making decisions under conditions of uncertainty. Similarly, statistical methods acknowledge biases as often useful and sometimes necessary for making inferences from data. These insights have been instrumental in redefining biases as not inherently negative, but as sometimes essential components that can and should be harnessed to improve AI systems.


Rolling in the deep of cognitive and AI biases

Vakali, Athena, Tantalaki, Nicoleta

arXiv.org Artificial Intelligence

Nowadays, we delegate many of our decisions to Artificial Intelligence (AI), which acts either solo or as a human companion in decisions made to support several sensitive domains, like healthcare, financial services, and law enforcement. AI systems, even those carefully designed to be fair, are heavily criticized for delivering misjudged and discriminatory outcomes against individuals and groups. Numerous works on AI algorithmic fairness are devoted to Machine Learning pipelines which address biases and quantify fairness under a purely computational view. However, the continuous unfair and unjust AI outcomes indicate that there is an urgent need to understand AI as a sociotechnical system, inseparable from the conditions in which it is designed, developed, and deployed. Although the synergy of humans and machines seems imperative to make AI work, the significant impact of human and societal factors on AI bias is currently overlooked. We address this critical issue by following a radically new methodology under which human cognitive biases become core entities in our AI fairness overview. Inspired by the cognitive science definition and taxonomy of human heuristics, we identify how harmful human actions influence the overall AI lifecycle, and reveal hidden pathways from human to AI biases. We introduce a new mapping, which justifies the reflections of human heuristics in AI biases, and we detect relevant fairness intensities and inter-dependencies. We envision that this approach will contribute to revisiting AI fairness through deeper human-centric case studies, revealing hidden bias causes and effects.


Human-AI Interactions and Societal Pitfalls

Castro, Francisco, Gao, Jian, Martin, Sébastien

arXiv.org Artificial Intelligence

Generative artificial intelligence (AI) systems, particularly large language models (LLMs), have improved at a rapid pace. For example, ChatGPT recently showcased its advanced capacity to perform complex tasks and human-like behaviors (OpenAI 2023b), reaching 100 million users within two months of its 2022 launch (Hu 2023). This progress is not limited to text generation, as demonstrated by other recent generative AI systems such as Midjourney (Midjourney 2023) (a text-to-image generative AI) and GitHub Copilot (Github 2023) (an AI pair programmer that can autocomplete code). Eloundou et al. (2023) estimated that about 80% of the U.S. workforce could be affected by the introduction of LLMs, and 19% of the workers may have at least 50% of their tasks impacted. In particular, AI can make users more productive by generating complex content in seconds, while users can simply communicate their preferences. For example, Noy and Zhang (2023) highlighted that ChatGPT can substantially improve productivity in writing tasks, and GitHub claims that Copilot increases developer productivity by up to 55% (Kalliamvakou 2023). However, content generated with the help of AI is not exactly the same as content generated without AI. The boost in productivity may come at the expense of users' idiosyncrasies, such as personal style and tastes, preferences we would naturally express without AI. To let users express their preferences, many AI systems let users edit their prompt (e.g., Midjourney) or allow more


AI bias might not be a threat and here's why

FOX News

OpenAI, developer of the ChatGPT language model, is the best-funded and largest AI platform company, with over $10 billion in funding at a valuation of nearly $30 billion. Microsoft uses OpenAI, but Google, Meta, Apple, and Amazon have their own AI platforms, and there are hundreds of other AI startups in Silicon Valley. Will industry forces drive one of these to become a monopoly? When Google started its search business, there were already a dozen existing search platforms, such as Yahoo, AltaVista, Excite, and InfoSeek. Many observers asked whether we needed Google as yet another search engine.


Troubling trend of woke AI is a big threat to free speech

FOX News

Have you ever seen the YouTube video of the young boy at Christmas unwrapping a Nintendo 64 and completely freaking out with excitement? That kid was me! My peak experiences as a kid always coincided with groundbreaking technology launches.



5 AI Articles We Almost Forgot We Love

#artificialintelligence

Oops! Valentine's Day came and went. In this belated Valentine's Day post, TechBeacon presents five unforgettable articles on AI that we love.


Bias in AI and Machine Learning: Sources and Solutions - Lexalytics

#artificialintelligence

"Bias in AI" has long been a critical area of research and concern in machine learning circles and has grown in awareness among general consumer audiences over the past couple of years as knowledge of AI has grown. It's a term that describes situations where ML-based data analytics systems show bias against certain groups of people. These biases usually reflect widespread societal biases about race, gender, biological sex, age, and culture. There are two types of bias in AI. One is algorithmic AI bias or "data bias," where algorithms are trained using biased data.


We need to build better bias in AI

#artificialintelligence

At their best, AI systems extend and augment the work we do, helping us to realize our goals. At their worst, they undermine them. We've all heard of high-profile instances of AI bias, like Amazon's machine learning (ML) recruitment engine that discriminated against women, or the racist results from Google Vision. These cases don't just harm individuals; they work against their creators' original intentions.